743 research outputs found

    Motion Controller Design for Cargo Ships

    Get PDF

    Design of Front-End for Recommendation Systems: Towards a Hybrid Architecture

    Get PDF
    To provide personalized online shopping suggestions, recommendation systems play an increasingly important role in “closing a transaction”. Some leading online movie platforms, such as Netflix and Rotten Tomatoes, have exploited content-based recommendation approaches. However, insufficient feature information in item profiles may lead to less accurate recommendations. In this paper, we propose a recommendation method called Collective Intelligence Social Tagging (CIST), which combines a content-based recommendation approach with a crowd-sourced social tagging function. We used an online movie sales platform as a use case of how a CIST approach could increase the accuracy of recommended results and improve the overall user experience. To understand the feasibility of and satisfaction with CIST, we conducted fifteen design interviews to first determine user and developer perspectives on CIST, and then collected their overall design input.
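The hybrid idea above, blending content-based similarity with crowd-sourced tag similarity, can be sketched as follows. This is an illustrative stand-in rather than the paper's implementation: the feature names, the `alpha` weight, and the cosine blend are all assumptions.

```python
from collections import Counter
from math import sqrt

def tag_profile(tag_lists):
    """Aggregate crowd-sourced tag lists for one item into weighted counts."""
    counts = Counter()
    for tags in tag_lists:          # one tag list per contributing user
        counts.update(tags)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse feature vectors (Counters)."""
    num = sum(a[k] * b[k] for k in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def hybrid_score(content_a, content_b, tags_a, tags_b, alpha=0.5):
    """Blend content-based and tag-based similarity; alpha is a free weight."""
    return alpha * cosine(content_a, content_b) + (1 - alpha) * cosine(tags_a, tags_b)
```

Ranking candidate items by `hybrid_score` against a user's liked items gives a simple hybrid recommender; how CIST actually weights and collects tags may differ.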

    Efficient Numerical Algorithm for Large-Scale Damped Natural Gradient Descent

    Full text link
    We propose a new algorithm for efficiently solving linear systems involving the damped Fisher matrix in large-scale scenarios where the number of parameters significantly exceeds the number of available samples. This problem is fundamental to natural gradient descent and stochastic reconfiguration. Our algorithm is based on Cholesky decomposition and is generally applicable. Benchmark results show that it is significantly faster than existing methods.
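The abstract does not spell out the algorithm, but a standard way to combine a damping term, a low-rank Fisher matrix, and a Cholesky factorisation is the Woodbury identity: with per-sample gradients `O` of shape (P, N) and F = O Oᵀ / N, the damped solve reduces to an N × N Cholesky system. The sketch below illustrates that reduction; it is an assumption, not necessarily the paper's method.

```python
import numpy as np

def damped_fisher_solve(O, g, lam):
    """Solve (O @ O.T / N + lam * I) x = g without forming the P x P matrix.

    O   : (P, N) per-sample gradient matrix, with P >> N
    g   : (P,) right-hand side (e.g. the loss gradient)
    lam : damping strength, lam > 0

    By the Woodbury identity,
        x = (g - O @ z) / lam,  where  (N * lam * I + O.T @ O) z = O.T @ g,
    so only an N x N symmetric positive definite system is factorised.
    """
    P, N = O.shape
    S = N * lam * np.eye(N) + O.T @ O        # small N x N Gram matrix
    L = np.linalg.cholesky(S)                # S = L @ L.T
    z = np.linalg.solve(L.T, np.linalg.solve(L, O.T @ g))
    return (g - O @ z) / lam
```

The cost is O(P N² + N³) instead of the O(P³) of a direct solve, which is the regime the abstract targets (parameters far outnumbering samples).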

    Hermite spectral method for the inelastic Boltzmann equation

    Full text link
    We propose a Hermite spectral method for the inelastic Boltzmann equation that makes the computation of spatially two-dimensional periodic problems affordable on current hardware. The new algorithm is based on a Hermite expansion, where the expansion coefficients for the VHS model reduce to several summations that can be derived exactly. Moreover, a new collision model is built from a combination of the quadratic collision operator and a linearized collision operator, which helps balance computational cost and accuracy. Various numerical experiments, including spatially two-dimensional simulations, demonstrate the accuracy and efficiency of the scheme.
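As a minimal illustration of the expansion underlying such a method, the sketch below computes one-dimensional Hermite expansion coefficients by Gauss-Hermite quadrature, using NumPy's probabilists' Hermite module. It only shows the basis projection; the paper's exact reductions of the VHS collision coefficients are not reproduced.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def hermite_coefficients(f, K, n_quad=32):
    """Coefficients c_k with f(v) ~= sum_k c_k He_k(v) exp(-v**2/2) / sqrt(2 pi).

    Orthogonality, int He_j He_k exp(-v**2/2) dv = sqrt(2 pi) * k! * delta_jk,
    gives c_k = (1 / k!) * int f(v) He_k(v) dv, evaluated here by
    Gauss-Hermite quadrature (f must accept NumPy arrays).
    """
    x, w = He.hermegauss(n_quad)    # nodes/weights for the weight exp(-v^2/2)
    g = f(x) * np.exp(x * x / 2)    # strip the Gaussian weight from f
    coeffs = np.empty(K)
    for k in range(K):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                 # coefficient vector selecting He_k
        coeffs[k] = np.sum(w * g * He.hermeval(x, ek)) / factorial(k)
    return coeffs
```

For a standard Maxwellian, only the zeroth coefficient is nonzero, which is a quick sanity check on the projection.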

    Distilling word vectors from contextualised language models

    Get PDF
    Although contextualised language models (CLMs) have reduced the need for static word embeddings in many NLP tasks, static representations of word meaning remain crucial in tasks where words must be encoded without context. Such tasks arise in domains such as information retrieval. Compared to learning static word embeddings from scratch, distilling such representations from CLMs has advantages in downstream tasks [68], [2]. Usually, the embedding of a word w is distilled by feeding random sentences that mention w to a CLM and extracting the corresponding contextualised representations. In this thesis, we hypothesise that distilling word embeddings from CLMs can be improved by feeding more informative mentions to the CLM. As a first contribution, we therefore propose a sentence selection strategy based on a topic model. Since distilling high-quality word embeddings from CLMs normally requires many mentions of each word, we investigate whether decent word embeddings can be obtained from a few carefully selected mentions. As our second contribution, we explore a range of sentence selection strategies and test the resulting word embeddings on various evaluation tasks. We find that 20 informative sentences per word are sufficient to obtain competitive word embeddings, especially when the sentences are selected by our proposed strategies. Beyond sentence selection, as our third contribution, we also study other strategies for obtaining word embeddings. We find that SBERT embeddings capture an aspect of word meaning that is highly complementary to the mention embeddings we previously focused on. We therefore propose combining the vectors produced by these two methods through a contrastive learning model. The results confirm that combining these vectors leads to more informative word embeddings.
In conclusion, this thesis shows that better static word embeddings can be efficiently distilled from CLMs by strategically selecting sentences and by combining complementary methods.
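A minimal sketch of the two combined ingredients, mention-averaged CLM vectors and a complementary second view, is given below. In the thesis the second view comes from SBERT and the combination is learned contrastively; here both inputs are plain NumPy arrays and the combination is simple concatenation after L2-normalisation, all of which are stand-in assumptions.

```python
import numpy as np

def mention_embedding(mention_vectors):
    """Average the contextualised vectors a CLM produced for selected mentions
    of a word (one row per mention)."""
    return np.mean(mention_vectors, axis=0)

def combine(mention_vec, second_view):
    """Concatenate two L2-normalised views of a word's meaning.

    A stand-in for the thesis's contrastive combination model; in the real
    pipeline the second view would be an SBERT-derived vector.
    """
    a = mention_vec / np.linalg.norm(mention_vec)
    b = second_view / np.linalg.norm(second_view)
    return np.concatenate([a, b])
```

Normalising each view first keeps either source from dominating the combined vector, which is one simple way to realise the complementarity the abstract describes.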